# Cross-lingual pretraining

**Wav2vec2 Large Xlsr 53 842h Luxembourgish 14h** (by Lemswasabi, MIT license)
A large wav2vec 2.0 model fine-tuned on 842 hours of unlabeled and 14 hours of labeled Luxembourgish speech data, supporting Luxembourgish speech recognition.
Tags: Speech Recognition, Transformers
**Wav2vec2 Large Xlsr 53 Greek** (by lighteternal, Apache-2.0 license)
A Greek automatic speech recognition model based on the XLSR-Wav2Vec2 architecture, developed by the Hellenic Military Academy and the Technical University of Crete.
Tags: Speech Recognition
© 2025 AIbase